\input memo.tex[let,jmc]
\title{Comments on Carl Hewitt's ``Organizational Semantics''}

	I shall comment on several aspects of the paper.

	1. As a distinct approach to AI, the approach proposed in this
paper is rather undeveloped.  Nevertheless, its presentation at this time
may be quite justified, because it is entirely possible that none of the
more developed approaches will succeed in reaching human level
intelligence.  Something entirely new may be required.  Therefore, I
take a positive attitude toward Hewitt's proposals.
Instead of comparing it head-on with my favorite logic approach, I will
consider it in its own terms and discuss how it might be made better.
I confess that my ideas for its improvement involve injecting more
logic, especially non-monotonic logic.

	The intuition behind the whole approach is that correct decisions
arise from the interaction of small parts of an organized whole.  Each
part has its own internally consistent but limited view of the matter.
The limited views are mutually inconsistent in general.

	2. The term ``organizational semantics'' doesn't seem to be
explained as a semantics in the conventional mathematical sense, i.e. as a
way of assigning meanings to certain syntactic objects.  One can imagine
doing so, however.  I suppose this would involve regarding a certain class
of programs as operating via interaction of the parts of an organization,
each with its own goals and agenda, and defining the meaning of the
program in terms of this interaction.

	3. The parts of the program are to proceed by ``negotiation
and debate''.  These are never distinguished from each other.  It
reminds me that Marvin Minsky and I are mentioned several times in
the autobiography of a certain distinguished physicist, occurring
always in the phrase ``John McCarthy and Marvin Minsky''.  Marvin
and I have always imagined ourselves as distinct.

	It isn't spelled out in the paper, but I would imagine
negotiation to involve messages back and forth between two parts
of the program aimed at achieving a compromise between their
respective goals and/or views.  More complex negotiations could
occur among several entities.

	Debate, on the other hand, might involve each part expressing
arguments for its preferred action with a decision being made by
a third part of the program.

	Jon Doyle in various papers proposes ``deliberation'' as
describing the way a person considers various points of view and
the way a program should consider them.  In deliberation, it would
seem that a single entity considers successively subgroups of the
propositions at its disposal and the recommendations that follow
from the separate groups and then combines the considerations in
some way.  This agrees better with my intuition of how a person
decides complex matters and the way an AI program should decide them.
But see the final note.

	4. Hewitt considers that logic is all right within the
micro-theories but inadequate for the overall reasoning, because
it can't deal reasonably with contradictory information, since
contradictory information permits the deduction of arbitrary
conclusions.  He further remarks that non-monotonic logic doesn't
change this.

	I think Hewitt is mistaken about this point.  It seems to
me that ``micro-theories'' dealing with separate aspects of the
world are likely to be useful, and Hewitt's problem of how to
combine them when they suggest contradictory conclusions, especially
conclusions about what to do, is a real and interesting problem.
Whether it's the main problem of AI is another matter.

	However, the various systems of non-monotonic logic do
purport to attack the problem.  They do it by proposing that
the micro-theories be formalized as default theories or, in the case
of circumscription, by using {\it abnormality predicates} as in
(McCarthy 1986).  Taken separately
they can propose contradictory conclusions, because abnormality
is minimized in circumscription, and taking only a small subset
of the available information into account permits making many
of the abnormality predicates false.  When bodies of information
are combined, the theories are still consistent, because formulating
the micro-theories as default theories or using circumscription has
weakened them enough to keep them consistent.
When abnormality is minimized in the combined theory, in general
more tuples of entities have to be abnormal in the various ways.
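
	To make this concrete, here is the standard toy example written
with abnormality predicates (it is not one of Hewitt's examples; the
formalization is in the style of (McCarthy 1986)).  One micro-theory says
that birds fly unless abnormal in the first way; the other says that
penguins don't fly unless abnormal in the second way; free variables are
universally quantified:
$$bird(x) ∧ ¬ab1(x) ⊃ flies(x),$$
$$penguin(x) ∧ ¬ab2(x) ⊃ ¬flies(x),$$
$$penguin(Tweety),\qquad penguin(x) ⊃ bird(x).$$
Circumscribing $ab1$ in the first micro-theory alone (together with
$bird(Tweety)$) yields $flies(Tweety)$, while circumscribing $ab2$ in the
second alone yields $¬flies(Tweety)$.  Their union is nevertheless
consistent, because the abnormality predicates have weakened the defaults;
minimizing abnormality in the combined theory merely forces
$ab1(Tweety) ∨ ab2(Tweety)$, giving two minimal models.  Adding the
cancellation axiom $penguin(x) ⊃ ab1(x)$ gives the more specific
micro-theory priority and leaves $¬flies(Tweety)$ as the conclusion.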

	It can't be claimed that the various forms of non-monotonic
logic have been demonstrated to meet Hewitt's goals, but anyway
they purport to be viable approaches to meeting them.

	Let us consider Hewitt's specific example of a nuclear power
plant, but let's move the issue back to the East Coast, where it
is currently being fought over.  There are two micro-theories.
One is nuclear science and engineering combined with specific
facts about the design of the Seabrook nuclear power plant.
It has many defaults in it, including assumptions that the
laws of physics and the effects of various dosages of
radiation are understood and that the inspections purporting
to show that the plant has been built in accordance with its
design have been done correctly.  The second micro-theory,
that of Governor Dukakis, has considerable overlap with
the first, but perhaps also contains some high priority defaults
preferring conclusions that accord with his commitments to
supporters of the Clamshell Alliance.

	Any individual resolving the issue in his own mind is going to
take into account certain subsets of these micro-theories, according to
his level of understanding and the ``salience'' to him of various
considerations.  For example, he may actually take into account
who would be offended or pleased by his reaching various conclusions
and what this would do to his relations with his friends.  We are
very far from being able to formalize all this today in logic or
in any other system.

	However, besides any circumscriptions that may be done,
there is also a process of deliberation and debate that consists,
at least partially, of making various additions to, and maybe even
deletions from, the set of facts being taken into account in the
non-monotonic reasoning processes.  These additions (and deletions)
are the result of a process whose logical formulation would require
meta-reasoning.

	I certainly hope that Hewitt will be able to put his
intuitions about how this should proceed in a more explicit form.

	5. The main program example involves concurrent processes.
Two people with a jointly accessible bank account attempt withdrawals
that would jointly take out more money than there is in the account.
The process is formalized both in terms of guarded Horn clauses
and in terms of actors.
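
	To fix ideas, here is a procedural sketch of the difficulty (my
own, in Python, with invented names; the amounts are those of the bad
check note below).  Each withdrawal first tests a guard and then acts,
and unless the two steps are made atomic both guards can succeed and the
account can be overdrawn.

import threading, time

class Account:
    # Toy joint account.  The guard (sufficient balance) and the action
    # (the deduction) are deliberately separate steps, so two concurrent
    # withdrawals can both pass the guard and jointly overdraw the account.
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if self.balance >= amount:      # guard
            time.sleep(0.01)            # widen the guard-action window for illustration
            self.balance -= amount      # action
            return True
        return False

account = Account(100)
t1 = threading.Thread(target=account.withdraw, args=(70,))
t2 = threading.Thread(target=account.withdraw, args=(80,))
t1.start(); t2.start(); t1.join(); t2.join()
print(account.balance)   # typically -50; making withdraw atomic (a lock) keeps the balance >= 0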

	The guarded Horn clause formulation is syntactically logic,
but Hewitt asserts that interpreting the clauses as logical sentences
leads to a contradiction.  Without following the details myself,
I'll bet he's right.

	It seems to me that this is just one more illustration of the
well-known fact that logic programming in its practical forms doesn't
coincide with logic.  The issue may be stated as follows.  A logic
program has the form of a collection of logical sentences.  Suppose
it computes a result.  This result has the form of an assertion
that a certain tuple of objects satisfies a certain predicate.
It is natural to ask whether this assertion is a logical consequence of the
collection of sentences comprising the program.  This is the {\it soundness}
problem for the logic programming language.  It can also be asked whether
the program will find that answer whenever it is a consequence of the
sentences that a certain tuple of constants satisfies the sentences of
the program.  This is the {\it completeness} problem for the language.
Pure Prolog has been proved sound though not complete, and the
soundness result has been extended to certain classes of ``stratified''
programs with negation.  Prolog with cuts is even more incomplete, and as soon as
``predicates'' with side effects are added to the language, it usually
becomes unsound.

	Nevertheless, even with the unsoundness, it is sometimes
intuitively attractive to think of the program as though it were
logic, taking  into account the differences between what actually
happens and the logical consequences of the program when this is
important.  Hewitt's example of the guarded Horn clause program is a case
in which the logic program is inconsistent regarded as a logical theory.

	It may often be worthwhile to try to patch up the correspondence
between logic and logic programming.  For example, if the language
has ``predicates'' with side effects, we can imagine that the
``predicates'' really have additional arguments representing the
state, and the side effect ``predicates'' relate the values of
variables in one state with their values in a successor state.
Thus logic programmers often take advantage of the fact that
proving something true and taking an action that makes it true
sometimes have analogous logical properties.
Some such interpretation may also work for the guarded Horn
clause language.
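
	For instance (this is only a schematic reading of my own, not
Hewitt's formulation), a side-effecting goal $withdraw(amount)$ in the
account example can be re-read as a relation on states,
$$withdraw(amount,s,s') ≡ balance(s) ≥ amount ∧ balance(s') = balance(s)-amount,$$
so that what the program establishes is a chain of states, each related
to the next, rather than a timeless assertion about a single balance.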

	My own preference is to contemplate programs that work with
collections of logical sentences with sound interpretations rather than to
patch up logic programming.  Anyway, logic programming is limited in what
logical sentences it can interpret.  However, I agree that logic
programming is worthwhile, and patching up its semantics is also
worthwhile.  For this reason, I cannot attach great significance to
Hewitt's discovery of another example of a logic program which is
inconsistent regarded as a collection of logical sentences.

	It may also be useful to regard logic programs of various kinds
as objects, e.g. as quoted S-expressions, and then have a theory of their
interpretations expressed in logic.  For example, a logical theory of
logic programs with ``predicates'' that made sentences become true as
well as proved them could itself be expressed in a formalism in which
states of computation were explicit objects.  While the logic program
itself might be inconsistent regarded as a collection of logical sentences,
there would be a proper logical theory of its behavior when executed.
It would be interesting if the pseudo-logical form of the logic program
resulted in simplifications in the logical theory of its behavior.
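
	A miniature illustration of what I have in mind (my own sketch,
in Python, not anything from Hewitt's paper or from an existing system):
represent a propositional ``logic program'' as data, including a
pseudo-predicate ``assert'' that makes an atom true rather than proving
it, and let the interpreter thread the store of asserted atoms as an
explicit state.  The program may be odd read as logic, but the
interpreter's behavior is an ordinary relation between states, about
which a proper logical theory can be stated.

def solve(goals, clauses, store):
    # goals:   a list of atoms (strings) or ('assert', atom) pseudo-goals
    # clauses: a dict mapping an atom to a list of alternative bodies
    # store:   a frozenset of atoms already made true by side effects
    if not goals:
        return store                                  # success: the final state
    g, rest = goals[0], goals[1:]
    if isinstance(g, tuple) and g[0] == 'assert':
        return solve(rest, clauses, store | {g[1]})   # side effect: a new state
    if g in store:                                    # already made true
        return solve(rest, clauses, store)
    for body in clauses.get(g, []):                   # try each clause for g
        result = solve(list(body) + rest, clauses, store)
        if result is not None:
            return result
    return None                                       # failure

# p is ``proved'' by asserting q and then finding q in the store.
clauses = {'p': [[('assert', 'q'), 'q']]}
print(solve(['p'], clauses, frozenset()))             # frozenset({'q'})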

\noindent {\bf References}

	The final version of these comments will contain references ---
to circumscription, to Jon Doyle's discussion of deliberation
and to some of the attempts to extend the semantics of logic programs.

\noindent {\bf Note: Circumscriptive treatment of the bad check example}

	Vladimir Lifschitz remarks that the example of the
checking account may be approximated using circumscription as follows. We
write the axioms

$$\eqalign{
badcheck(Check1) ∧ badcheck(Check2) &⊃ newbalance=oldbalance,\cr
¬badcheck(Check1) ∧ badcheck(Check2) &⊃ newbalance=oldbalance-amount(Check1),\cr
badcheck(Check1) ∧ ¬badcheck(Check2) &⊃ newbalance=oldbalance-amount(Check2),\cr
¬badcheck(Check1) ∧ ¬badcheck(Check2) &⊃ newbalance=oldbalance-amount(Check1)\cr
&\qquad -amount(Check2),\cr}$$
$$newbalance≥0,$$
$$amount(Check1)=70,$$
$$amount(Check2)=80,$$
and
$$oldbalance=100.$$

If we add some arithmetic axioms to these
sentences and circumscribe $badcheck$ with $newbalance$ varied,
then there will be two minimal models, with one bad check in each.
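
	Spelled out: if neither check is bad, then
$newbalance = 100 - 70 - 80 = -50$, contradicting $newbalance ≥ 0$, so
every model makes at least one check bad.  The two minimal models
therefore make exactly one check bad: $badcheck(Check1)$ with
$newbalance = 100 - 80 = 20$, or $badcheck(Check2)$ with
$newbalance = 100 - 70 = 30$; the model with both checks bad is not
minimal.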

\noindent {\bf Note: Debate and Negotiation}

	I share Hewitt's intuition that some internal decision making
processes are analogous to debate and negotiation.

	We take as our notion of debate a process of making a decision
between two alternatives, e.g. alternative actions $a1$ and $a2$.  A key
phrase in a debate is, ``Yes, but $\ldots$''.  Using circumscription and
an abnormality theory, this can be modeled as follows.  Debater D1 makes
an assertion that has the effect of supporting his proposed outcome $a1$
provided abnormality is circumscribed taking the agreed facts $α$
into account plus his assertion $p1$.  Debater D2's ``yes, but $\ldots$''
adds an assertion $p2$ whose effect is to cancel one of D1's default cases.
Thus circumscribing abnormality in $α ∪ \{p1\}$ entails $ShouldDo(a1)$,
while circumscribing abnormality in $α ∪ \{p1,p2\}$ entails $ShouldDo(a2)$.
D1 can now propose an additional assertion that will switch the decision
back.  It seems to me that internal debate is more likely to follow this
model than real debate, because denying or completely ignoring assertions
is less likely in internal debate.
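
	To make the ``yes, but'' step concrete, here is one toy
instantiation (the predicates are my own choices, not Hewitt's or
anything from his paper).  Let $a1$ be buying a certain item and $a2$
rejecting it, and let the agreed facts $α$ be
$$cheap(x) ∧ ¬ab1(x) ⊃ ShouldDo(buy(x)),$$
$$defective(x) ⊃ ab1(x),$$
$$defective(x) ∧ ¬ab2(x) ⊃ ShouldDo(reject(x)).$$
D1's assertion $p1$ is $cheap(Item)$, and D2's $p2$ is $defective(Item)$.
Circumscribing $ab1$ and $ab2$ jointly in $α ∪ \{p1\}$, with the remaining
predicates allowed to vary, gives $¬ab1(Item)$ and hence
$ShouldDo(buy(Item))$; adding $p2$ forces $ab1(Item)$, cancelling the
first default, and the same circumscription of $α ∪ \{p1,p2\}$ gives
$ShouldDo(reject(Item))$ instead.  D1 could switch the decision back with
a further assertion, say that the defect is covered by a warranty, given
an axiom cancelling the rejection default for warranted items and a
further default recommending the purchase of cheap warranted items.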

	Also this model seems better to me than isolated micro-theories,
because it proposes a way for the debaters' assertions to interact.

	Negotiation is different.  Here it seems that instead of two
alternative decisions, there are two objectives to be optimized, and
some compromise between them has to be reached.  Offhand, it seems
likely that formalization of this process should involve quantitative
considerations, so that it can be stated how much of one desideratum
is being traded off for how much of the other.  The process of
real two party negotiation seems quite different from a process of
an individual compromising his objectives.  This is because two
party negotiation often involves threats, implicit or explicit, about
what each party will do if agreement is not reached.  Only if the
two parties have a common overall objective does it become like
the decision of a single party.
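
	The simplest quantitative reading (offered only as a placeholder,
not as a proposal) takes the two objectives as utility functions $u1$ and
$u2$ over the possible agreements $x$ and has the compromiser choose $x$
to maximize $λu1(x) + (1-λ)u2(x)$ for some weight $0 < λ < 1$; the ratio
$λ/(1-λ)$ then says how much of one desideratum is being traded for how
much of the other.  Two party negotiation adds what each party gets if no
agreement is reached, which is just where the analogy with a single agent
compromising among his own objectives breaks down.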

	Anyway I don't have a logical model of negotiation to propose
at present.
\smallskip\centerline{Copyright \copyright\ \number\year\ by John McCarthy}
\smallskip\noindent{This draft of HEWITT.1[S87,JMC]\ TEXed on \jmcdate\ at \theTime}
\vfill\eject\end
The content of this paper divides into two parts.

1. A proposal for ``organizational semantics'' of programs that deal with 
inconsistent beliefs and conflicting proposals for action. Some desiderata are
mentioned, but no definite proposals are made. In fact, the phrase 
``organizational semantics'' occurs only in the abstract and the introduction.

2. An assertion that the use of logic in AI is limited to ``microtheories,''
because logic cannot deal with the contradictory microtheories that powerful
AI systems need to use. It is asserted that non-monotonic logic is covered
by the same remarks. This is mistaken, because circumscription purports to deal
with this problem by weakening the theories enough to restore consistency and
then getting conclusions as strong as possible by minimizing something, e.g.
abnormality.

Hewitt's Diablo Canyon example might be handled by axioms

$\neg stupid-fanatics(Abalone-Alliance) \supset <theory1>$

where 

$<theory> \vdash \neg safe (Diablo-Canyon) (Governor-Dukakis) \supset
<theory1> \neg political-appointment (Governor-Dukakis)$

\end
The intuitions

1. arguing entities

2. negotiation and debate

3. connection to logic

4. Non-monotonic logic indeed purports to solve the problem.

5. Organizational semantics
to explain behavior as the outcome of negotiation and debate within
an organization.